It’s hard to get through a day in analytics now without hearing the words interpretability and explainability. These terms have become important in a world where machine learning and artificial intelligence (AI) models are becoming more ubiquitous. However, what do the two terms mean—and more importantly, why do they matter?

What are interpretability and explainability?

Interpretability is about determining how an analytical model or algorithm came to its conclusions. When a model is easily interpretable, it is possible to understand what the model used to make its predictions: the inputs and the processes involved.

Some models are easier to understand than others. Decision trees and linear regression models, for example, are intrinsically more transparent than complex machine learning models. Interpretability is also a function of complexity: a model with fewer inputs is likely to be more interpretable.
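To make this concrete, here is a minimal sketch, assuming Python and scikit-learn (the article does not prescribe any tooling), of an intrinsically interpretable model: a linear regression whose learned coefficients map directly onto its inputs. The data and feature names are hypothetical.

```python
# A minimal sketch of an intrinsically interpretable model: a linear
# regression whose coefficients map directly onto its inputs.
# Data and feature names here are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three hypothetical inputs
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Each coefficient states how the prediction moves per unit change in that
# input, so the "how" of the model is visible without any extra tooling.
for name, coef in zip(["age", "income", "tenure"], model.coef_):
    print(f"{name:>7}: {coef:+.2f}")
```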

[Figure: Interpretability vs Explainability: The Black Box of Machine Learning]

Explainability is about why an algorithm produces the responses it does. It therefore considers issues such as the weighting of each variable within the model, to assess how much each contributes to the answer. It is, if you like, the science behind the model: the process within the model may remain a mystery, but we understand why the answer has been provided.
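One common way to approximate that "why" for a less transparent model is to score each variable's contribution after the fact. The sketch below, again assuming Python and scikit-learn with hypothetical data, uses permutation importance: each input is shuffled in turn, and the drop in model performance indicates how heavily the model relies on it. The article itself does not mandate any particular technique.

```python
# A minimal sketch of one way to estimate variable importance for a
# black-box model: permutation importance. Model choice, data, and
# feature names are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

black_box = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle each input in turn and measure how much the model's score drops.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "income", "tenure"], result.importances_mean):
    print(f"{name:>7}: importance {score:.3f}")
```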

It is essential to understand that neither interpretability nor explainability applies solely to machine learning and AI models. They apply to any analytical model. It's just that machine learning models are more likely to be less interpretable and less explainable than many other models. This is a product of their nature: because they learn from the data rather than follow rules set out in advance, it is harder to trace how they reach their conclusions, and they can sometimes draw the wrong ones.

Why should we care?

Perhaps the biggest question about interpretability and explainability is why we should care. Why do we need to know about either of them?

There are several possible answers to this. The first is regulatory. Regulations worldwide, including the European Union's General Data Protection Regulation (GDPR), state that people have a right to understand the reasons for decisions made about or affecting them. Organisations therefore cannot legally use analytical models to support decision-making if they do not understand those models, which means those models need to be both interpretable and explainable.

The second reason is trust and cost. The cloud, and access to software-as-a-service and analytics-as-a-service, have reduced the upfront cost of analytics investments and enabled rapid scaling up or down to meet demand. However, there is still a cost. Organisations want the benefits of machine learning or AI algorithms, but they need to know they will get good value from any investment.

This means that they need to be confident that they will be able to rely on the outcomes from their algorithms. That, in turn, requires those algorithms to be interpretable and explainable. To trust the outputs from the algorithms or models, everybody in the organisation needs to know how and why those algorithms generate their results.

Perhaps even more importantly, it is essential to monitor and audit models to ensure they continue generating the 'right' results. In this context, 'right' means accurate, not 'giving you the answers you want'. To be able to audit, you must again understand how and why decisions are made.
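As a rough illustration of what such ongoing monitoring might look like, the sketch below (Python with scikit-learn assumed; the baseline figure and tolerance are hypothetical) compares a model's accuracy on recent labelled data against the level recorded when the model was approved.

```python
# A minimal monitoring sketch: flag the model for review if its accuracy on
# recent data drifts too far below the level recorded at sign-off.
# The baseline, tolerance, and data below are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy recorded when the model was approved
TOLERANCE = 0.05           # how far accuracy may drift before review

# Hypothetical recent labels and model predictions gathered in production.
recent_truth = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
recent_preds = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

current = accuracy_score(recent_truth, recent_preds)
if current < BASELINE_ACCURACY - TOLERANCE:
    print(f"Review needed: accuracy fell to {current:.2f}")
else:
    print(f"Within tolerance: accuracy {current:.2f}")
```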

A matter of principle?

The desired endpoint of adopting analytics is to become entirely data-driven. However, this is only possible if you know that you can rely on the data and on the models or algorithms using that data. This makes perfect sense if you consider using algorithms to support diagnostics or treatment decisions in medicine. Not every business decision has life-and-death consequences, of course, but many do have significant implications for individuals. If models cannot be trusted, people risk losing faith in them and going back to 'gut instinct' as a basis for decisions.

No organisation can afford to have ‘black boxes’ making decisions. AI and machine learning algorithms must be interpretable and explainable to form a basis for data-driven decisions. This is the bottom line. And it is both a matter of common sense and an important principle for any organisation. Therefore, when selecting or developing AI and machine learning models, it is essential to consider both interpretability and explainability. You cannot safely jump on the AI bandwagon without ticking those two boxes.

How to adopt responsible AI within your data journey

It comes down to the data scientist being able to demonstrate how the organisation uses AI and data, from how a model is developed to how it is governed. Adopting responsible AI is like a blanket that covers the entire journey: it can be embraced in every phase, from data requirements gathering to continuous improvement of your production cycle. Responsible AI also strengthens AI security through validation and monitoring, and helps create systems that are ethical, understandable, and legal. A good starting point is to build a customised Responsible AI toolkit to tackle challenges around bias and fairness across your model lifecycle, as sketched below.
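As one illustration of what such a toolkit check might contain, and not a prescription from SAS, the sketch below (Python; the data, group labels, and outcomes are hypothetical) compares a model's positive-outcome rate across groups, a common first step when probing for potential bias.

```python
# A minimal fairness-check sketch: compare a model's positive-outcome rate
# across groups. Groups and predictions below are hypothetical.
import numpy as np

groups      = np.array(["A", "A", "B", "B", "B", "A", "B", "A"])
predictions = np.array([ 1,   0,   0,   0,   1,   1,   0,   1 ])  # 1 = approved

for g in np.unique(groups):
    rate = predictions[groups == g].mean()
    print(f"group {g}: approval rate {rate:.2f}")

# A large gap between groups is a prompt for investigation, not proof of
# bias, but tracking it across the model lifecycle is a concrete start.
```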

Learn more about data ethics and responsible AI at SAS.

About Author

Prathiba Krishna

Data Scientist, SAS

Prathiba is an experienced Data Scientist with a rich background in the insurance industry. With a Master's degree in Operational Research with Applied Statistics and Risk, she is passionate about the varied applications of machine learning and AI techniques and how they propel data scientists to build better models and solutions. Skilled in data analysis and modelling, she uses SAS software and open-source tools to assess and address problems within enterprise organisations.
